
    Lyra: Password-Based Key Derivation with Tunable Memory and Processing Costs

    We present Lyra, a password-based key derivation scheme based on cryptographic sponges. Lyra was designed to be strictly sequential (i.e., not easily parallelizable), providing strong security even against attackers that use multiple processing cores (e.g., custom hardware or a powerful GPU). At the same time, it is very simple to implement in software and allows legitimate users to fine-tune its memory and processing costs according to the desired level of security against brute-force password guessing. We compare Lyra with similar-purpose state-of-the-art solutions, showing how our proposal provides a higher security level and overcomes limitations of existing schemes. Specifically, we show that if we fix Lyra's total processing time t on a legitimate platform, the cost of a memory-free attack against the algorithm is exponential, while the best known result in the literature (namely, against the scrypt algorithm) is quadratic. In addition, for the same processing time, Lyra allows for a higher memory usage than its counterparts, further increasing the cost of brute-force attacks.
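    To illustrate the general idea of a sponge-based, memory-hard key derivation function with tunable time and memory costs, here is a minimal Python sketch. It is not the Lyra specification: shake_256 merely stands in for the cryptographic sponge, and the t_cost/m_cost parameters are illustrative analogues of Lyra's tunables.

```python
# Minimal sketch of a sponge-based, memory-hard KDF pattern (NOT the Lyra spec).
# hashlib.shake_256 stands in for the cryptographic sponge; t_cost and m_cost
# are illustrative time/memory parameters, analogous in spirit to Lyra's tunables.
import hashlib

def sponge(*chunks: bytes, out_len: int = 64) -> bytes:
    h = hashlib.shake_256()
    for c in chunks:
        h.update(c)
    return h.digest(out_len)

def memory_hard_kdf(password: bytes, salt: bytes,
                    t_cost: int = 3, m_cost: int = 1024,
                    out_len: int = 32) -> bytes:
    # Setup phase: fill a memory matrix of m_cost rows with sponge outputs.
    rows = [sponge(password, salt, i.to_bytes(4, "big")) for i in range(m_cost)]
    state = sponge(password, salt)
    # Wandering phase: t_cost passes of strictly sequential, data-dependent
    # row visits, which is what makes memory-free or parallel attacks costly.
    for _ in range(t_cost):
        for i in range(m_cost):
            j = int.from_bytes(state[:8], "big") % m_cost  # data-dependent index
            state = sponge(state, rows[i], rows[j])
            rows[i] = sponge(rows[i], state)
    return sponge(state, b"output")[:out_len]

key = memory_hard_kdf(b"correct horse battery staple", b"per-user salt")
```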

    Measuring universal health coverage based on an index of effective coverage of health services in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019

    Background Achieving universal health coverage (UHC) involves all people receiving the health services they need, of high quality, without experiencing financial hardship. Making progress towards UHC is a policy priority for both countries and global institutions, as highlighted by the agenda of the UN Sustainable Development Goals (SDGs) and WHO's Thirteenth General Programme of Work (GPW13). Measuring effective coverage at the health-system level is important for understanding whether health services are aligned with countries' health profiles and are of sufficient quality to produce health gains for populations of all ages.

    Methods Based on the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019, we assessed UHC effective coverage for 204 countries and territories from 1990 to 2019. Drawing from a measurement framework developed through WHO's GPW13 consultation, we mapped 23 effective coverage indicators to a matrix representing health service types (eg, promotion, prevention, and treatment) and five population-age groups spanning from reproductive and newborn to older adults (≥65 years). Effective coverage indicators were based on intervention coverage or outcome-based measures such as mortality-to-incidence ratios to approximate access to quality care; outcome-based measures were transformed to values on a scale of 0–100 based on the 2·5th and 97·5th percentile of location-year values. We constructed the UHC effective coverage index by weighting each effective coverage indicator relative to its associated potential health gains, as measured by disability-adjusted life-years for each location-year and population-age group. For three tests of validity (content, known-groups, and convergent), UHC effective coverage index performance was generally better than that of other UHC service coverage indices from WHO (ie, the current metric for SDG indicator 3.8.1 on UHC service coverage), the World Bank, and GBD 2017. We quantified frontiers of UHC effective coverage performance on the basis of pooled health spending per capita, representing UHC effective coverage index levels achieved in 2019 relative to country-level government health spending, prepaid private expenditures, and development assistance for health. To assess current trajectories towards the GPW13 UHC billion target—1 billion more people benefiting from UHC by 2023—we estimated additional population equivalents with UHC effective coverage from 2018 to 2023.

    Findings Globally, performance on the UHC effective coverage index improved from 45·8 (95% uncertainty interval 44·2–47·5) in 1990 to 60·3 (58·7–61·9) in 2019, yet country-level UHC effective coverage in 2019 still spanned from 95 or higher in Japan and Iceland to lower than 25 in Somalia and the Central African Republic. Since 2010, sub-Saharan Africa showed accelerated gains on the UHC effective coverage index (at an average increase of 2·6% [1·9–3·3] per year up to 2019); by contrast, most other GBD super-regions had slowed rates of progress in 2010–2019 relative to 1990–2010. Many countries showed lagging performance on effective coverage indicators for non-communicable diseases relative to those for communicable diseases and maternal and child health, despite non-communicable diseases accounting for a greater proportion of potential health gains in 2019, suggesting that many health systems are not keeping pace with the rising non-communicable disease burden and associated population health needs. In 2019, the UHC effective coverage index was associated with pooled health spending per capita (r=0·79), although countries across the development spectrum had much lower UHC effective coverage than is potentially achievable relative to their health spending. Under maximum efficiency of translating health spending into UHC effective coverage performance, countries would need to reach $1398 pooled health spending per capita (US$ adjusted for purchasing power parity) in order to achieve 80 on the UHC effective coverage index. From 2018 to 2023, an estimated 388·9 million (358·6–421·3) more population equivalents would have UHC effective coverage, falling well short of the GPW13 target of 1 billion more people benefiting from UHC during this time. Current projections point to an estimated 3·1 billion (3·0–3·2) population equivalents still lacking UHC effective coverage in 2023, with nearly a third (968·1 million [903·5–1040·3]) residing in south Asia.

    Interpretation The present study demonstrates the utility of measuring effective coverage and its role in supporting improved health outcomes for all people—the ultimate goal of UHC and its achievement. Global ambitions to accelerate progress on UHC service coverage are increasingly unlikely unless concerted action on non-communicable diseases occurs and countries can better translate health spending into improved performance. Focusing on effective coverage and accounting for the world's evolving health needs lays the groundwork for better understanding how close—or how far—all populations are in benefiting from UHC.
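    As a schematic illustration of the index construction described in the Methods above (rescaling outcome-based measures to 0–100 using the 2·5th and 97·5th percentiles of location-year values, then weighting indicators by their DALY-based potential health gains), the following Python sketch uses made-up indicator names and values rather than GBD 2019 estimates.

```python
# Schematic sketch of the index construction described above; all indicator
# names and numbers are illustrative, not GBD 2019 estimates.
import numpy as np

def rescale_0_100(values: np.ndarray, higher_is_better: bool = True) -> np.ndarray:
    # Transform raw outcome-based measures to 0-100 using the 2.5th and 97.5th
    # percentiles across location-years.
    lo, hi = np.percentile(values, [2.5, 97.5])
    scaled = np.clip((values - lo) / (hi - lo), 0.0, 1.0)
    return (scaled if higher_is_better else 1.0 - scaled) * 100

def effective_coverage_index(scores: dict[str, float],
                             daly_weights: dict[str, float]) -> float:
    # Weight each indicator by its associated potential health gains (DALYs).
    total = sum(daly_weights[k] for k in scores)
    return sum(scores[k] * daly_weights[k] / total for k in scores)

# Hypothetical values: rescale one outcome-based measure (a mortality-to-
# incidence ratio, where lower is better) across location-years, then combine
# it with two coverage indicators for a single location-year.
mir_all_locations = np.array([0.9, 0.7, 0.55, 0.4, 0.3, 0.2, 0.15])
ihd_score = float(rescale_0_100(mir_all_locations, higher_is_better=False)[2])

scores = {"ihd_mir_based": ihd_score, "art_coverage": 72.0, "dtp3": 88.0}
dalys = {"ihd_mir_based": 3.4e6, "art_coverage": 1.2e6, "dtp3": 0.8e6}
print(round(effective_coverage_index(scores, dalys), 1))
```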

    Erratum: Global, regional, and national comparative risk assessment of 84 behavioural, environmental and occupational, and metabolic risks or clusters of risks for 195 countries and territories, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017

    Interpretation: By quantifying levels and trends in exposures to risk factors and the resulting disease burden, this assessment offers insight into where past policy and programme efforts might have been successful and highlights current priorities for public health action. Decreases in behavioural, environmental, and occupational risks have largely offset the effects of population growth and ageing, in relation to trends in absolute burden. Conversely, the combination of increasing metabolic risks and population ageing will probably continue to drive the increasing trends in non-communicable diseases at the global level, which presents both a public health challenge and opportunity. We see considerable spatiotemporal heterogeneity in levels of risk exposure and risk-attributable burden. Although levels of development underlie some of this heterogeneity, observed to expected (O/E) ratios show risks for which countries are overperforming or underperforming relative to their level of development. As such, these ratios provide a benchmarking tool to help to focus local decision making. Our findings reinforce the importance of both risk exposure monitoring and epidemiological research to assess causal connections between risks and health outcomes, and they highlight the usefulness of the GBD study in synthesising data to draw comprehensive and robust conclusions that help to inform good policy and strategic health planning.
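    As a small illustration of the observed to expected (O/E) benchmarking mentioned above, the sketch below divides a country's observed risk-attributable burden by the burden expected from its level of development alone; the numbers are invented.

```python
# Illustrative O/E benchmarking: the "expected" burden is what a country's
# level of development alone would predict. O/E > 1 means more risk-attributable
# burden than expected (underperforming); O/E < 1 means less than expected.
# The numbers below are invented, not GBD estimates.
def oe_ratio(observed_dalys_per_100k: float, expected_dalys_per_100k: float) -> float:
    return observed_dalys_per_100k / expected_dalys_per_100k

print(round(oe_ratio(1450.0, 1200.0), 2))  # 1.21 -> worse than expected for its development level
```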

    Redactable Blockchain - Or - Rewriting History in Bitcoin and Friends

    We put forward a new framework that makes it possible to re-write or compress the content of any number of blocks in decentralized services exploiting the blockchain technology. As we argue, there are several reasons to prefer an editable blockchain, spanning from the necessity to remove inappropriate content and the possibility to support applications requiring re-writable storage, to 'the right to be forgotten.' Our approach generically leverages so-called chameleon hash functions (Krawczyk and Rabin, NDSS '00), which allow determining hash collisions efficiently, given secret trapdoor information. We detail how to integrate a chameleon hash function in virtually any blockchain-based technology, for both cases where the power of redacting the blockchain content is in the hands of a single trusted entity and where such a capability is distributed among several distrustful parties (as is the case with Bitcoin). We also report on a proof-of-concept implementation of a redactable blockchain, building on top of Nakamoto's Bitcoin Core. The prototype only requires minimal changes to the way current client software interprets the information stored in the blockchain and to the current blockchain, block, or transaction structures. Moreover, our experiments show that the overhead imposed by a redactable blockchain is small compared to the case of an immutable one.
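    The toy Python sketch below shows a discrete-log chameleon hash in the spirit of Krawczyk and Rabin: whoever holds the trapdoor can compute a collision, which is what allows a block's content to be rewritten without breaking the hash link. The parameters are deliberately tiny and the message is used directly as an exponent; a real deployment would use a large group and hash the message first.

```python
# Toy discrete-log chameleon hash (Krawczyk-Rabin style). Demo parameters only:
# a real deployment needs a >=2048-bit safe prime and proper message hashing.
import secrets

q = 1019            # prime order of the subgroup
p = 2 * q + 1       # 2039, a safe prime
g = 4               # generator of the order-q subgroup (a quadratic residue)

x = secrets.randbelow(q - 1) + 1      # trapdoor (secret key)
h = pow(g, x, p)                      # public key

def chameleon_hash(m: int, r: int) -> int:
    # CH(m, r) = g^m * h^r mod p = g^(m + x*r) mod p
    return (pow(g, m % q, p) * pow(h, r, p)) % p

def find_collision(m: int, r: int, m_new: int) -> int:
    # With the trapdoor x, solve m + x*r = m_new + x*r' (mod q) for r'.
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 123, secrets.randbelow(q)       # original block content and randomness
m_new = 456                            # redacted block content
r_new = find_collision(m, r, m_new)
assert chameleon_hash(m, r) == chameleon_hash(m_new, r_new)  # same hash value
```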

    Amazon Biobank: Assessing the Implementation of a Blockchain-Based Genomic Database

    The bioeconomy, an industrial production model based on biological resources and sustainable development, can be considered an emerging opportunity for biodiversity-abundant regions, such as the Amazon rainforest. However, existing genomic repositories lack data traceability and economic benefit-sharing mechanisms, resulting in limited motivation for data providers to contribute. To address this challenge, we present an implementation of Amazon Biobank, a community-driven genetic database. By leveraging blockchain and peer-to-peer (P2P) technologies, we enable distributed and transparent data sharing; meanwhile, by using smart contracts directly registered in the system, we enforce fair benefit-sharing among all system participants. Moreover, Amazon Biobank is designed to be auditable by any user, reducing the need for trusted system managers. This paper aims to validate this model by describing the implementation of a prototype using Hyperledger Fabric and BitTorrent and evaluating its performance. Our results show that the prototype can support at least 400 transactions per second in a small network, a number that can be further improved by adding new nodes or allocating additional computational resources. We conclude that the Amazon Biobank proposal is technically feasible, and its real-world deployment has the potential to foster sustainable development in high-biodiversity regions, in addition to promoting collaborative biotechnology research.
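    As an illustration of the kind of benefit-sharing rule such a smart contract could enforce, the plain-Python sketch below splits each payment for a dataset among its registered contributors in proportion to their shares. It is conceptual only; the actual prototype runs this logic as Hyperledger Fabric chaincode, and the class and method names here are hypothetical.

```python
# Conceptual sketch of a benefit-sharing rule: every payment for a dataset is
# split among recorded contributors in proportion to their registered shares.
# Illustrative logic only, not the Amazon Biobank chaincode.
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    dataset_id: str
    shares: dict[str, float] = field(default_factory=dict)  # contributor -> share

    def register_contribution(self, contributor: str, share: float) -> None:
        self.shares[contributor] = self.shares.get(contributor, 0.0) + share

    def split_payment(self, amount: float) -> dict[str, float]:
        total = sum(self.shares.values())
        return {c: amount * s / total for c, s in self.shares.items()}

record = DatasetRecord("amazon-sample-001")
record.register_contribution("collector_A", 0.6)
record.register_contribution("lab_B", 0.4)
print(record.split_payment(1000.0))   # {'collector_A': 600.0, 'lab_B': 400.0}
```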

    DUNE Offline Computing Conceptual Design Report

    This document describes the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), in particular, the conceptual design of the offline computing needed to accomplish its physics goals. Our emphasis in this document is the development of the computing infrastructure needed to acquire, catalog, reconstruct, simulate and analyze the data from the DUNE experiment and its prototypes. In this effort, we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves and to provide computing that achieves the physics goals of the DUNE experiment.

    This document describes the conceptual design for the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE). The goals of the experiment include 1) studying neutrino oscillations using a beam of neutrinos sent from Fermilab in Illinois to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, 2) studying astrophysical neutrino sources and rare processes, and 3) understanding the physics of neutrino interactions in matter. We describe the development of the computing infrastructure needed to achieve the physics goals of the experiment by storing, cataloging, reconstructing, simulating, and analyzing ~30 PB of data/year from DUNE and its prototypes. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions and advanced algorithms as HEP computing evolves. We describe the physics objectives, organization, use cases, and proposed technical solutions.
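    For a rough sense of the scale implied by ~30 PB of data per year, the short calculation below converts that figure to an average sustained data rate, assuming decimal petabytes and averaging over a full calendar year.

```python
# Back-of-the-envelope scale check for the ~30 PB/year figure quoted above
# (assuming decimal petabytes and averaging over a full calendar year).
PB = 1e15                               # bytes in a decimal petabyte
seconds_per_year = 365.25 * 24 * 3600

avg_rate_GB_s = 30 * PB / seconds_per_year / 1e9
print(f"~{avg_rate_GB_s:.2f} GB/s sustained average")   # ~0.95 GB/s
```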

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10³ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
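    Below is a minimal sketch of the Numba pattern described above: a Python function is JIT-compiled into a CUDA kernel and launched with one thread per pixel. The induced-current model is a toy placeholder; only the parallelization approach mirrors the text, and running it requires a CUDA-capable GPU.

```python
# Minimal Numba CUDA sketch: one GPU thread per pixel computes a toy
# "induced current" value. Placeholder physics; illustrates the
# Python-to-CUDA-kernel workflow only.
import numpy as np
from numba import cuda

@cuda.jit
def induced_current_kernel(charges, distances, currents):
    i = cuda.grid(1)                      # global thread index = pixel index
    if i < currents.shape[0]:
        # Toy model: current falls off with the square of the drift distance.
        currents[i] = charges[i] / (1.0 + distances[i] ** 2)

n_pixels = 1000
charges = np.random.rand(n_pixels).astype(np.float32)
distances = np.random.rand(n_pixels).astype(np.float32)
currents = np.zeros(n_pixels, dtype=np.float32)

threads_per_block = 128
blocks = (n_pixels + threads_per_block - 1) // threads_per_block
induced_current_kernel[blocks, threads_per_block](charges, distances, currents)
```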